Introduction to Open Data Science - Course Project

About the project

I am looking forward to learning a lot about machine learning and R during this course. My GitHub repository is https://github.com/hhelskya/IODS-project

# This is a so-called "R chunk" where you can write R code.

date()
## [1] "Wed Dec 02 16:10:40 2020"

Cannot wait to learn more.


Regression and model validation

This week I read the learning2014 survey data into R, explored it graphically and numerically, and fitted linear regression models to explain exam points, finishing with regression diagnostics.

date()
## [1] "Wed Dec 02 16:10:40 2020"
ds <- read.csv("C:/Users/Heli/Heli/HY/Introduction to Open Data Science/Projects/IODS-project/data/learning2014.csv", header=TRUE)
ds$Points
##   [1] 25 12 24 10 22 21 21 31 24 26 31 31 23 25 21 31 20 22  9 24 28 30 24  9 26
##  [26] 32 32 33 29 30 19 23 19 12 10 11 20 26 31 20 23 12 24 17 29 23 28 31 23 25
##  [51] 18 19 22 25 21  9 28 25 29 33 33 25 18 22 17 25 28 22 26 11 29 22 21 28 33
##  [76] 16 31 22 31 23 26 12 26 31 19 30 12 17 18 19 21 24 28 17 18 17 23 26 28 31
## [101] 27 25 23 21 27 28 23 21 25 11 19 24 28 21 24 24 20 19 30 22 16 16 19 30 23
## [126] 19 18 28 21 19 27 24 21 20 28 12 21 28 31 18 25 19 21 16  7 21 17 22 18 25
## [151] 24 23 23 26 12 32 22 20 21 23 20 28 31 18 30 19
dim(ds)
## [1] 166   7
str(ds)
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : chr  "F" "M" "F" "M" ...
##  $ Age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ attitude: num  1.5 1.67 1.5 2.17 1.83 ...
##  $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ Points  : int  25 12 24 10 22 21 21 31 24 26 ...
summary(ds)
##     gender               Age           attitude          deep      
##  Length:166         Min.   :17.00   Min.   :1.000   Min.   :1.583  
##  Class :character   1st Qu.:21.00   1st Qu.:1.500   1st Qu.:3.333  
##  Mode  :character   Median :22.00   Median :1.667   Median :3.667  
##                     Mean   :25.51   Mean   :1.883   Mean   :3.680  
##                     3rd Qu.:27.00   3rd Qu.:2.000   3rd Qu.:4.083  
##                     Max.   :55.00   Max.   :4.667   Max.   :4.917  
##       stra            surf           Points     
##  Min.   :1.250   Min.   :1.583   Min.   : 7.00  
##  1st Qu.:2.625   1st Qu.:2.417   1st Qu.:19.00  
##  Median :3.188   Median :2.833   Median :23.00  
##  Mean   :3.121   Mean   :2.787   Mean   :22.72  
##  3rd Qu.:3.625   3rd Qu.:3.167   3rd Qu.:27.75  
##  Max.   :5.000   Max.   :4.333   Max.   :33.00

The dataset contains 166 rows and 7 columns. It includes gender (F/M), age, exam points, and composite variables built as means of the original questionnaire items (a sketch of how such composites are built follows the list). The composite variables and the original questions they combine are:

attitude: Aa, Ab, Ac, Ad, Ae, Af
deep: D03+D11+D19+D27, D07+D14+D22+D30, D06+D15+D23+D31
surf: SU02+SU10+SU18+SU26, SU05+SU13+SU21+SU29, SU08+SU16+SU24+SU32
stra: ST01+ST09+ST17+ST25, ST04+ST12+ST20+ST28
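
As a sketch of how such mean-based composites can be built (hypothetical: it assumes a raw data frame lrn14 that still contains the individual question columns; the actual wrangling was done in a separate script):

# hypothetical sketch: average the deep-learning questions into one composite
deep_questions <- c("D03","D11","D19","D27","D07","D14","D22","D30","D06","D15","D23","D31")
lrn14$deep <- rowMeans(lrn14[, deep_questions])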

Gender is of type chr, Age and Points are of type int, and the rest of the variables are of type num, as the str() output above shows.

The minimum age in the dataset is 17 and the maximum 55. Values for attitude range between 1.000-4.667, for deep between 1.583-4.917, for stra between 1.250-5.000, and for surf between 1.583-4.333. The minimum points are 7.00 and the maximum 33.00. The summary output above also shows the 1st quartile, median, mean, and 3rd quartile of each variable.


pairs(ds[-1])

The scatter plot matrix above shows the pairwise relationships between the variables. Gender (the first column) has been left out of the plot.

library(GGally)
## Loading required package: ggplot2
## Registered S3 method overwritten by 'GGally':
##   method from   
##   +.gg   ggplot2
library(ggplot2)
p <- ggpairs(ds, mapping = aes(), lower = list(combo = wrap("facethist", bins = 20)))
p

Above is a more advanced plot that shows, for instance, the correlations between the variables and the distribution of each variable.

# a scatter plot of points versus attitude
library(ggplot2)
# colnames(learning2014)[7] <- "points"

qplot(attitude, Points, data = ds) + geom_smooth(method = "lm")
## `geom_smooth()` using formula 'y ~ x'

my_model <- lm(Points  ~ attitude + deep + Age, data = ds )
summary(my_model)
## 
## Call:
## lm(formula = Points ~ attitude + deep + Age, data = ds)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.0562  -3.7634   0.2952   4.6517  10.7479 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 25.26830    3.69183   6.844  1.5e-10 ***
## attitude    -0.19559    0.61906  -0.316    0.752    
## deep        -0.10027    0.83390  -0.120    0.904    
## Age         -0.07111    0.05940  -1.197    0.233    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.921 on 162 degrees of freedom
## Multiple R-squared:  0.009356,   Adjusted R-squared:  -0.008989 
## F-statistic:  0.51 on 3 and 162 DF,  p-value: 0.6759

The Residuals section summarizes the distribution of the residuals: the minimum is -16.0562 and the maximum 10.7479, the median is 0.2952, and the first and third quartiles are -3.7634 and 4.6517. The t-value measures the size of a coefficient relative to its standard error, so the larger its absolute value, the stronger the evidence against the null hypothesis that the coefficient is zero. Age has the largest absolute t-value (-1.197), but it is still not large enough to reject the null hypothesis. The p-value (Pr) is smallest for Age (0.233) but still larger than 0.05, so none of these predictors is statistically significant and the null hypotheses cannot be rejected. The residual standard error is 5.921. The R-squared values indicate how well the model explains the variance of the response. Multiple R-squared (0.009356) and adjusted R-squared (-0.008989) are almost the same, and both show that the model explains the variance very poorly.
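
To make the R-squared interpretation concrete, here is a small sketch (base R only) that reproduces the multiple R-squared by hand as 1 - RSS/TSS:

# multiple R-squared by hand: one minus residual sum of squares over total sum of squares
rss <- sum(resid(my_model)^2)
tss <- sum((ds$Points - mean(ds$Points))^2)
1 - rss / tss  # ~ 0.009356, matching the summary above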

my_model <- lm(Points  ~ stra + surf +gender, data = ds )
summary(my_model)
## 
## Call:
## lm(formula = Points ~ stra + surf + gender, data = ds)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -15.2430  -3.4525   0.3105   4.2753  10.2382 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  22.2924     3.4464   6.468 1.12e-09 ***
## stra          1.0936     0.6022   1.816   0.0712 .  
## surf         -1.2249     0.8752  -1.400   0.1635    
## genderM       1.2599     0.9736   1.294   0.1974    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.81 on 162 degrees of freedom
## Multiple R-squared:  0.0462, Adjusted R-squared:  0.02854 
## F-statistic: 2.616 on 3 and 162 DF,  p-value: 0.05295
confint(my_model)
##                   2.5 %    97.5 %
## (Intercept) 15.48661602 29.098118
## stra        -0.09564338  2.282801
## surf        -2.95303542  0.503321
## genderM     -0.66254961  3.182448
my_model <- lm(Points  ~ attitude, data = ds )
summary(my_model)
## 
## Call:
## lm(formula = Points ~ attitude, data = ds)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -15.7533  -3.7392   0.2186   4.9615  10.3311 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  23.0346     1.2479  18.459   <2e-16 ***
## attitude     -0.1688     0.6165  -0.274    0.785    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.911 on 164 degrees of freedom
## Multiple R-squared:  0.0004569,  Adjusted R-squared:  -0.005638 
## F-statistic: 0.07496 on 1 and 164 DF,  p-value: 0.7846

Not significant

my_model <- lm(Points  ~ deep, data = ds )
summary(my_model)
## 
## Call:
## lm(formula = Points ~ deep, data = ds)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -15.6913  -3.6935   0.2862   4.9957  10.3537 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  23.1141     3.0908   7.478 4.31e-12 ***
## deep         -0.1080     0.8306  -0.130    0.897    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.913 on 164 degrees of freedom
## Multiple R-squared:  0.000103,   Adjusted R-squared:  -0.005994 
## F-statistic: 0.01689 on 1 and 164 DF,  p-value: 0.8967

Not significant

my_model <- lm(Points  ~ Age, data = ds )
summary(my_model)
## 
## Call:
## lm(formula = Points ~ Age, data = ds)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.0360  -3.7531   0.0958   4.6762  10.8128 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 24.52150    1.57339  15.585   <2e-16 ***
## Age         -0.07074    0.05901  -1.199    0.232    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.887 on 164 degrees of freedom
## Multiple R-squared:  0.008684,   Adjusted R-squared:  0.00264 
## F-statistic: 1.437 on 1 and 164 DF,  p-value: 0.2324

Not significant

my_model <- lm(Points  ~ stra, data = ds )
summary(my_model)
## 
## Call:
## lm(formula = Points ~ stra, data = ds)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.5581  -3.8198   0.1042   4.3024  10.1394 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   19.233      1.897  10.141   <2e-16 ***
## stra           1.116      0.590   1.892   0.0603 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.849 on 164 degrees of freedom
## Multiple R-squared:  0.02135,    Adjusted R-squared:  0.01538 
## F-statistic: 3.578 on 1 and 164 DF,  p-value: 0.06031

Not significant

my_model <- lm(Points  ~ surf, data = ds )
summary(my_model)
## 
## Call:
## lm(formula = Points ~ surf, data = ds)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -14.6539  -3.3744   0.3574   4.4734  10.2234 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  27.2017     2.4432  11.134   <2e-16 ***
## surf         -1.6091     0.8613  -1.868   0.0635 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.851 on 164 degrees of freedom
## Multiple R-squared:  0.02084,    Adjusted R-squared:  0.01487 
## F-statistic:  3.49 on 1 and 164 DF,  p-value: 0.06351

Not significant

my_model <- lm(Points  ~ gender, data = ds )
summary(my_model)
## 
## Call:
## lm(formula = Points ~ gender, data = ds)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -15.3273  -3.3273   0.5179   4.5179  10.6727 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  22.3273     0.5613  39.776   <2e-16 ***
## genderM       1.1549     0.9664   1.195    0.234    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.887 on 164 degrees of freedom
## Multiple R-squared:  0.008632,   Adjusted R-squared:  0.002587 
## F-statistic: 1.428 on 1 and 164 DF,  p-value: 0.2338

Not significant either. We choose stra for further investigation, since it has the largest absolute t-value.

my_model <- lm(Points  ~ stra, data = ds )
plot(ds$stra,ds$Points)
abline(my_model, col="red")

my_model
## 
## Call:
## lm(formula = Points ~ stra, data = ds)
## 
## Coefficients:
## (Intercept)         stra  
##      19.234        1.116
summary(my_model)
## 
## Call:
## lm(formula = Points ~ stra, data = ds)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.5581  -3.8198   0.1042   4.3024  10.1394 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   19.233      1.897  10.141   <2e-16 ***
## stra           1.116      0.590   1.892   0.0603 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.849 on 164 degrees of freedom
## Multiple R-squared:  0.02135,    Adjusted R-squared:  0.01538 
## F-statistic: 3.578 on 1 and 164 DF,  p-value: 0.06031
# QQ plot of the raw Points values (the diagnostic plots below include a QQ plot of the residuals)
qqnorm(ds$Points, pch = 1, frame = FALSE)
qqline(ds$Points, col = "steelblue", lwd = 2)

# diagnostic plots of the model: Residuals vs Fitted, Normal QQ, Scale-Location, Residuals vs Leverage
plot(lm(Points~stra,data=ds)) 

The assumption is that the strategic approach (stra) explains the overall exam points (Points): Points is modelled as a linear combination of stra. A residual is the difference between an observed value of the response variable and the fitted value, i.e. the error. Residuals can be used to assess the validity of the model assumptions, and there are several assumptions about the errors.

The first assumption is that the errors are normally distributed. A QQ plot of the residuals is a method for exploring this assumption: the better the data points align with the line, the closer the errors are to a normal distribution. In our QQ plot the beginning and the end of the range deviate from the line, but in the middle the data points follow it quite well. We could say the errors fit the line well between roughly -1 and 1.5, reasonably well below -1, and not so well above 1.5. Therefore the errors are reasonably well normally distributed.

The second assumption is the constant variance of the errors: the size of the errors should not depend on the explanatory variables. This can be explored with a scatter plot of residuals versus model predictions; any pattern in the plot implies a problem with this assumption. In our example there is no pattern to be found, so this assumption holds.

Leverage measures how much impact a single observation has on the model. The residuals vs leverage plot can be used to find observations with unusually high impact, i.e. outliers. In our example there are no such outliers.
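
The same diagnostics can be sketched by hand from the fitted model object using standard base R accessors:

# extract the residuals and fitted values of the stra model
res <- resid(my_model)
fit <- fitted(my_model)
# residuals vs fitted values: a visible pattern would indicate non-constant variance
plot(fit, res, xlab = "Fitted values", ylab = "Residuals")
abline(h = 0, lty = 2)
# QQ plot of the residuals (rather than of the raw Points values)
qqnorm(res)
qqline(res)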


Logistic regression

date()
## [1] "Wed Dec 02 16:10:50 2020"
alc <- read.csv("C:/Users/Heli/Heli/HY/Introduction to Open Data Science/Projects/IODS-project/data/alc.csv", sep=",", header=TRUE)

colnames(alc)
##  [1] "school"     "sex"        "age"        "address"    "famsize"   
##  [6] "Pstatus"    "Medu"       "Fedu"       "Mjob"       "Fjob"      
## [11] "reason"     "nursery"    "internet"   "guardian"   "traveltime"
## [16] "studytime"  "failures"   "schoolsup"  "famsup"     "paid"      
## [21] "activities" "higher"     "romantic"   "famrel"     "freetime"  
## [26] "goout"      "Dalc"       "Walc"       "health"     "absences"  
## [31] "G1"         "G2"         "G3"         "alc_use"    "high_use"
dim(alc)
## [1] 382  35
str(alc)
## 'data.frame':    382 obs. of  35 variables:
##  $ school    : chr  "GP" "GP" "GP" "GP" ...
##  $ sex       : chr  "F" "F" "F" "F" ...
##  $ age       : int  18 17 15 15 16 16 16 17 15 15 ...
##  $ address   : chr  "U" "U" "U" "U" ...
##  $ famsize   : chr  "GT3" "GT3" "LE3" "GT3" ...
##  $ Pstatus   : chr  "A" "T" "T" "T" ...
##  $ Medu      : int  4 1 1 4 3 4 2 4 3 3 ...
##  $ Fedu      : int  4 1 1 2 3 3 2 4 2 4 ...
##  $ Mjob      : chr  "at_home" "at_home" "at_home" "health" ...
##  $ Fjob      : chr  "teacher" "other" "other" "services" ...
##  $ reason    : chr  "course" "course" "other" "home" ...
##  $ nursery   : chr  "yes" "no" "yes" "yes" ...
##  $ internet  : chr  "no" "yes" "yes" "yes" ...
##  $ guardian  : chr  "mother" "father" "mother" "mother" ...
##  $ traveltime: int  2 1 1 1 1 1 1 2 1 1 ...
##  $ studytime : int  2 2 2 3 2 2 2 2 2 2 ...
##  $ failures  : int  0 0 2 0 0 0 0 0 0 0 ...
##  $ schoolsup : chr  "yes" "no" "yes" "no" ...
##  $ famsup    : chr  "no" "yes" "no" "yes" ...
##  $ paid      : chr  "no" "no" "yes" "yes" ...
##  $ activities: chr  "no" "no" "no" "yes" ...
##  $ higher    : chr  "yes" "yes" "yes" "yes" ...
##  $ romantic  : chr  "no" "no" "no" "yes" ...
##  $ famrel    : int  4 5 4 3 4 5 4 4 4 5 ...
##  $ freetime  : int  3 3 3 2 3 4 4 1 2 5 ...
##  $ goout     : int  4 3 2 2 2 2 4 4 2 1 ...
##  $ Dalc      : int  1 1 2 1 1 1 1 1 1 1 ...
##  $ Walc      : int  1 1 3 1 2 2 1 1 1 1 ...
##  $ health    : int  3 3 3 5 5 5 3 1 1 5 ...
##  $ absences  : int  5 3 8 1 2 8 0 4 0 0 ...
##  $ G1        : int  2 7 10 14 8 14 12 8 16 13 ...
##  $ G2        : int  8 8 10 14 12 14 12 9 17 14 ...
##  $ G3        : int  8 8 11 14 12 14 12 10 18 14 ...
##  $ alc_use   : num  1 1 2.5 1 1.5 1.5 1 1 1 1 ...
##  $ high_use  : logi  FALSE FALSE TRUE FALSE FALSE FALSE ...
summary(alc)
##     school              sex                 age          address         
##  Length:382         Length:382         Min.   :15.00   Length:382        
##  Class :character   Class :character   1st Qu.:16.00   Class :character  
##  Mode  :character   Mode  :character   Median :17.00   Mode  :character  
##                                        Mean   :16.59                     
##                                        3rd Qu.:17.00                     
##                                        Max.   :22.00                     
##    famsize            Pstatus               Medu            Fedu      
##  Length:382         Length:382         Min.   :0.000   Min.   :0.000  
##  Class :character   Class :character   1st Qu.:2.000   1st Qu.:2.000  
##  Mode  :character   Mode  :character   Median :3.000   Median :3.000  
##                                        Mean   :2.806   Mean   :2.565  
##                                        3rd Qu.:4.000   3rd Qu.:4.000  
##                                        Max.   :4.000   Max.   :4.000  
##      Mjob               Fjob              reason            nursery         
##  Length:382         Length:382         Length:382         Length:382        
##  Class :character   Class :character   Class :character   Class :character  
##  Mode  :character   Mode  :character   Mode  :character   Mode  :character  
##                                                                             
##                                                                             
##                                                                             
##    internet           guardian           traveltime      studytime    
##  Length:382         Length:382         Min.   :1.000   Min.   :1.000  
##  Class :character   Class :character   1st Qu.:1.000   1st Qu.:1.000  
##  Mode  :character   Mode  :character   Median :1.000   Median :2.000  
##                                        Mean   :1.448   Mean   :2.037  
##                                        3rd Qu.:2.000   3rd Qu.:2.000  
##                                        Max.   :4.000   Max.   :4.000  
##     failures       schoolsup            famsup              paid          
##  Min.   :0.0000   Length:382         Length:382         Length:382        
##  1st Qu.:0.0000   Class :character   Class :character   Class :character  
##  Median :0.0000   Mode  :character   Mode  :character   Mode  :character  
##  Mean   :0.2016                                                           
##  3rd Qu.:0.0000                                                           
##  Max.   :3.0000                                                           
##   activities           higher            romantic             famrel     
##  Length:382         Length:382         Length:382         Min.   :1.000  
##  Class :character   Class :character   Class :character   1st Qu.:4.000  
##  Mode  :character   Mode  :character   Mode  :character   Median :4.000  
##                                                           Mean   :3.937  
##                                                           3rd Qu.:5.000  
##                                                           Max.   :5.000  
##     freetime        goout            Dalc            Walc           health     
##  Min.   :1.00   Min.   :1.000   Min.   :1.000   Min.   :1.000   Min.   :1.000  
##  1st Qu.:3.00   1st Qu.:2.000   1st Qu.:1.000   1st Qu.:1.000   1st Qu.:3.000  
##  Median :3.00   Median :3.000   Median :1.000   Median :2.000   Median :4.000  
##  Mean   :3.22   Mean   :3.113   Mean   :1.482   Mean   :2.296   Mean   :3.573  
##  3rd Qu.:4.00   3rd Qu.:4.000   3rd Qu.:2.000   3rd Qu.:3.000   3rd Qu.:5.000  
##  Max.   :5.00   Max.   :5.000   Max.   :5.000   Max.   :5.000   Max.   :5.000  
##     absences          G1              G2              G3           alc_use     
##  Min.   : 0.0   Min.   : 2.00   Min.   : 4.00   Min.   : 0.00   Min.   :1.000  
##  1st Qu.: 1.0   1st Qu.:10.00   1st Qu.:10.00   1st Qu.:10.00   1st Qu.:1.000  
##  Median : 3.0   Median :12.00   Median :12.00   Median :12.00   Median :1.500  
##  Mean   : 4.5   Mean   :11.49   Mean   :11.47   Mean   :11.46   Mean   :1.889  
##  3rd Qu.: 6.0   3rd Qu.:14.00   3rd Qu.:14.00   3rd Qu.:14.00   3rd Qu.:2.500  
##  Max.   :45.0   Max.   :18.00   Max.   :18.00   Max.   :18.00   Max.   :5.000  
##   high_use      
##  Mode :logical  
##  FALSE:268      
##  TRUE :114      
##                 
##                 
## 

My assumption is that going out (goout) and absences (absences) increase alcohol consumption, whereas more time spent on studies (studytime) and participation in other activities (activities) lower it.

library(tidyr); library(dplyr); library(ggplot2)
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
glimpse(alc)
## Rows: 382
## Columns: 35
## $ school     <chr> "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "G...
## $ sex        <chr> "F", "F", "F", "F", "F", "M", "M", "F", "M", "M", "F", "...
## $ age        <int> 18, 17, 15, 15, 16, 16, 16, 17, 15, 15, 15, 15, 15, 15, ...
## $ address    <chr> "U", "U", "U", "U", "U", "U", "U", "U", "U", "U", "U", "...
## $ famsize    <chr> "GT3", "GT3", "LE3", "GT3", "GT3", "LE3", "LE3", "GT3", ...
## $ Pstatus    <chr> "A", "T", "T", "T", "T", "T", "T", "A", "A", "T", "T", "...
## $ Medu       <int> 4, 1, 1, 4, 3, 4, 2, 4, 3, 3, 4, 2, 4, 4, 2, 4, 4, 3, 3,...
## $ Fedu       <int> 4, 1, 1, 2, 3, 3, 2, 4, 2, 4, 4, 1, 4, 3, 2, 4, 4, 3, 2,...
## $ Mjob       <chr> "at_home", "at_home", "at_home", "health", "other", "ser...
## $ Fjob       <chr> "teacher", "other", "other", "services", "other", "other...
## $ reason     <chr> "course", "course", "other", "home", "home", "reputation...
## $ nursery    <chr> "yes", "no", "yes", "yes", "yes", "yes", "yes", "yes", "...
## $ internet   <chr> "no", "yes", "yes", "yes", "no", "yes", "yes", "no", "ye...
## $ guardian   <chr> "mother", "father", "mother", "mother", "father", "mothe...
## $ traveltime <int> 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 3, 1, 2, 1, 1, 1, 3, 1,...
## $ studytime  <int> 2, 2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 3, 1, 2, 3, 1, 3, 2, 1,...
## $ failures   <int> 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3,...
## $ schoolsup  <chr> "yes", "no", "yes", "no", "no", "no", "no", "yes", "no",...
## $ famsup     <chr> "no", "yes", "no", "yes", "yes", "yes", "no", "yes", "ye...
## $ paid       <chr> "no", "no", "yes", "yes", "yes", "yes", "no", "no", "yes...
## $ activities <chr> "no", "no", "no", "yes", "no", "yes", "no", "no", "no", ...
## $ higher     <chr> "yes", "yes", "yes", "yes", "yes", "yes", "yes", "yes", ...
## $ romantic   <chr> "no", "no", "no", "yes", "no", "no", "no", "no", "no", "...
## $ famrel     <int> 4, 5, 4, 3, 4, 5, 4, 4, 4, 5, 3, 5, 4, 5, 4, 4, 3, 5, 5,...
## $ freetime   <int> 3, 3, 3, 2, 3, 4, 4, 1, 2, 5, 3, 2, 3, 4, 5, 4, 2, 3, 5,...
## $ goout      <int> 4, 3, 2, 2, 2, 2, 4, 4, 2, 1, 3, 2, 3, 3, 2, 4, 3, 2, 5,...
## $ Dalc       <int> 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2,...
## $ Walc       <int> 1, 1, 3, 1, 2, 2, 1, 1, 1, 1, 2, 1, 3, 2, 1, 2, 2, 1, 4,...
## $ health     <int> 3, 3, 3, 5, 5, 5, 3, 1, 1, 5, 2, 4, 5, 3, 3, 2, 2, 4, 5,...
## $ absences   <int> 5, 3, 8, 1, 2, 8, 0, 4, 0, 0, 1, 2, 1, 1, 0, 5, 8, 3, 9,...
## $ G1         <int> 2, 7, 10, 14, 8, 14, 12, 8, 16, 13, 12, 10, 13, 11, 14, ...
## $ G2         <int> 8, 8, 10, 14, 12, 14, 12, 9, 17, 14, 11, 12, 14, 11, 15,...
## $ G3         <int> 8, 8, 11, 14, 12, 14, 12, 10, 18, 14, 12, 12, 13, 12, 16...
## $ alc_use    <dbl> 1.0, 1.0, 2.5, 1.0, 1.5, 1.5, 1.0, 1.0, 1.0, 1.0, 1.5, 1...
## $ high_use   <lgl> FALSE, FALSE, TRUE, FALSE, FALSE, FALSE, FALSE, FALSE, F...
gather(alc) %>% glimpse
## Rows: 13,370
## Columns: 2
## $ key   <chr> "school", "school", "school", "school", "school", "school", "...
## $ value <chr> "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "...
g <- gather(alc) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free")
g + geom_bar()

The bar plots above show the distribution of each variable; gather() turns the data into key-value pairs so that all variables can be plotted at once.

my_model <- lm(high_use  ~ goout + absences + studytime + activities, data = alc )
summary(my_model)
## 
## Call:
## lm(formula = high_use ~ goout + absences + studytime + activities, 
##     data = alc)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -0.8109 -0.3015 -0.1403  0.3580  1.0860 
## 
## Coefficients:
##                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)    0.023625   0.086675   0.273 0.785336    
## goout          0.134130   0.019165   6.999 1.19e-11 ***
## absences       0.013991   0.003939   3.552 0.000430 ***
## studytime     -0.088581   0.025683  -3.449 0.000626 ***
## activitiesyes -0.047960   0.042727  -1.122 0.262380    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4146 on 377 degrees of freedom
## Multiple R-squared:  0.1897, Adjusted R-squared:  0.1811 
## F-statistic: 22.06 on 4 and 377 DF,  p-value: 2.228e-16

The t-value measures the size of a coefficient relative to its standard error, so the larger its absolute value, the stronger the evidence against the null hypothesis. goout, absences, and studytime have t-values large enough to reject the null hypothesis, and their p-values (Pr) are well below 0.05. Based on the results, the null hypothesis can be rejected for goout, absences, and studytime, but not for activities, so we will rebuild the model without activities. As expected, it looks like goout and absences increase alcohol consumption and studytime decreases it. Activities do not seem to have a clear effect.

my_model2 <- lm(high_use  ~ goout + absences + studytime, data = alc )
summary(my_model2)
## 
## Call:
## lm(formula = high_use ~ goout + absences + studytime, data = alc)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -0.7837 -0.2938 -0.1357  0.3622  1.0642 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  0.007067   0.085440   0.083 0.934125    
## goout        0.133271   0.019156   6.957 1.54e-11 ***
## absences     0.013966   0.003940   3.545 0.000442 ***
## studytime   -0.091472   0.025563  -3.578 0.000391 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4148 on 378 degrees of freedom
## Multiple R-squared:  0.187,  Adjusted R-squared:  0.1805 
## F-statistic: 28.97 on 3 and 378 DF,  p-value: < 2.2e-16

high_use = 0.01 + 0.13 * goout + 0.01 * absences - 0.09 * studytime (coefficients rounded from the output above)

Residual standard error: the standard deviation of the residuals (errors) of the regression model. Multiple R-squared: the proportion of the variance of the response explained by the model. Adjusted R-squared: the same measure penalized for the number of predictors, i.e. how well the model fits the data (ranging between 0 and 1). The R-squared is quite low, so there is probably something in the residual plots we should investigate.
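
As a quick sanity check (base R only), the fitted equation above is just the coefficients applied to the data:

# compute the predictions by hand from the coefficients and compare to predict()
b <- coef(my_model2)
manual <- b["(Intercept)"] + b["goout"] * alc$goout +
  b["absences"] * alc$absences + b["studytime"] * alc$studytime
all.equal(unname(manual), unname(predict(my_model2)))  # TRUE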

par(mfrow = c(2,2))
plot(my_model2, which=c(1,2,5))

The Residuals vs Fitted plot shows that the residuals are not at all on the regression line. The QQ plot shows that the data points really do not follow the line well. The Residuals vs Leverage plot shows most of the points at the beginning of the line. Most likely a linear model is not appropriate here. (There are no points outside Cook's distance, so no big outliers.)

# group the data by goout, absences and studytime; count observations and compute the share of high_use in each group
alc %>% group_by(goout, absences, studytime) %>% summarise(count = n(), mean_grade=mean(high_use))
## `summarise()` regrouping output by 'goout', 'absences' (override with `.groups` argument)
## # A tibble: 165 x 5
## # Groups:   goout, absences [77]
##    goout absences studytime count mean_grade
##    <int>    <int>     <int> <int>      <dbl>
##  1     1        0         1     4        0  
##  2     1        0         2     2        0  
##  3     1        1         1     1        0  
##  4     1        1         2     3        0  
##  5     1        1         4     1        0  
##  6     1        2         1     2        0.5
##  7     1        2         2     1        0  
##  8     1        3         2     1        0  
##  9     1        5         3     1        1  
## 10     1        8         1     1        0  
## # ... with 155 more rows

Groups with mean_grade 0 or 1 are uniformly low or high in consumption, but the other values show variance. For example, a student with goout = 5, absences = 19, and studytime = 2 shows high consumption in the data, but a student with the same goout and studytime and even more absences (21) shows low consumption. Since there are no outliers, these must be genuine data points, and the relationship is not linear.

library(ggplot2)
g1 <- ggplot(alc, aes(x = high_use, y = goout))
g1 + geom_boxplot() + ylab("go out")

Based on the box plot, high_use and going out a lot appear to be correlated.

g2 <- ggplot(alc, aes(x = high_use, y = absences))
g2 + geom_boxplot() + ylab("absences")

Based on the box plot, it looks like more absences mean higher alcohol consumption, although there are some exceptions.

g3 <- ggplot(alc, aes(x = high_use, y = studytime))
g3 + geom_boxplot() + ylab("study time")

Based on this box plot, the more time students spend studying, the less alcohol they consume. Let's build a logistic regression model (my_model3).

my_model3 <- glm(high_use ~ goout + absences + studytime, data = alc, family = "binomial")
summary(my_model3)
## 
## Call:
## glm(formula = high_use ~ goout + absences + studytime, family = "binomial", 
##     data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.8457  -0.7733  -0.5178   0.8432   2.5036  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -2.48582    0.52982  -4.692 2.71e-06 ***
## goout        0.72735    0.11786   6.171 6.78e-10 ***
## absences     0.07011    0.02204   3.181 0.001470 ** 
## studytime   -0.56048    0.16672  -3.362 0.000774 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 390.14  on 378  degrees of freedom
## AIC: 398.14
## 
## Number of Fisher Scoring iterations: 4

Let’s see the coefficients of the model.

coef(my_model3)
## (Intercept)       goout    absences   studytime 
## -2.48582049  0.72734718  0.07011218 -0.56048258

goout and studytime have stronger coefficients (in absolute value) for high_use than absences does.

# compute odds ratios (OR)
OR <- coef(my_model3) %>% exp
# compute confidence intervals (CI)
CI <- confint(my_model3) %>% exp
## Waiting for profiling to be done...
# print out the odds ratios with their confidence intervals
cbind(OR, CI)
##                     OR      2.5 %    97.5 %
## (Intercept) 0.08325721 0.02864231 0.2297441
## goout       2.06958310 1.65203749 2.6250666
## absences    1.07262851 1.02840235 1.1225709
## studytime   0.57093348 0.40733791 0.7846264

The odds ratio (OR) is a measure of the strength of association between an exposure and an outcome. OR > 1 means greater odds of the outcome given the exposure, i.e. the variable is positively associated with "success", in our case high alcohol consumption. goout clearly has high odds, absences less clearly so (1.07 > 1) but still above one, and studytime (< 1) means lower odds of the outcome. The confidence intervals (2.5% and 97.5%) show the uncertainty of the odds ratio estimates; an interval that does not include 1 indicates a statistically significant association.
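
A small sketch of what the goout odds ratio means in practice (the two students below are invented for illustration):

# increasing goout by one step multiplies the odds of high_use by exp(coef)
student_a <- data.frame(goout = 3, absences = 4, studytime = 2)
student_b <- data.frame(goout = 4, absences = 4, studytime = 2)
p_a <- predict(my_model3, newdata = student_a, type = "response")
p_b <- predict(my_model3, newdata = student_b, type = "response")
(p_b / (1 - p_b)) / (p_a / (1 - p_a))  # ~ 2.07, i.e. exp(coef(my_model3)["goout"])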

# predict() the probability of high_use
probabilities <- predict(my_model3, type = "response")
# add the predicted probabilities to 'alc'
alc <- mutate(alc, probability = probabilities)
# use the probabilities to make a prediction of high_use
alc <- mutate(alc, prediction = probability > 0.5)
# see the first ten original classes, predicted probabilities, and class predictions
select(alc, failures, absences, sex, high_use, probability, prediction) %>% head(10)
##    failures absences sex high_use probability prediction
## 1         0        5   F    FALSE  0.41414989      FALSE
## 2         0        3   F    FALSE  0.22892212      FALSE
## 3         2        8   F     TRUE  0.16921600      FALSE
## 4         0        1   F    FALSE  0.06645515      FALSE
## 5         0        2   F    FALSE  0.11796259      FALSE
## 6         0        8   M    FALSE  0.16921600      FALSE
## 7         0        0   M    FALSE  0.33238962      FALSE
## 8         0        4   F    FALSE  0.39724726      FALSE
## 9         0        0   M    FALSE  0.10413596      FALSE
## 10        0        0   M    FALSE  0.05317940      FALSE

This shows each prediction and the predicted probability behind it. The predictions are compared to the true values (high_use) to see how good the model is.

# create the confusion matrix, tabulate the target variable versus the predictions
table(high_use = alc$high_use, prediction = alc$prediction)
##         prediction
## high_use FALSE TRUE
##    FALSE   246   22
##    TRUE     66   48

The number of correct FALSE predictions (true negatives) is 246 and the number of incorrect TRUE predictions (false positives) is 22. The number of correct TRUE predictions (true positives) is 48 and the number of incorrect FALSE predictions (false negatives) is 66. The model predicts the students who do not consume a lot of alcohol quite well, but it is much worse at identifying those who do.
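
From the confusion matrix we can also compute some standard summary measures (a small base R sketch):

# overall accuracy, sensitivity and specificity from the confusion matrix
cm <- table(high_use = alc$high_use, prediction = alc$prediction)
sum(diag(cm)) / sum(cm)                    # accuracy: (246 + 48) / 382, about 0.77
cm["TRUE", "TRUE"] / sum(cm["TRUE", ])     # sensitivity: 48 / 114, about 0.42
cm["FALSE", "FALSE"] / sum(cm["FALSE", ])  # specificity: 246 / 268, about 0.92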

# initialize a plot of 'high_use' versus 'probability' in 'alc'
g11 <- ggplot(alc, aes(x = probability, y = high_use ))

g11 + geom_point(aes(col = prediction)) + ylab("high use")

# confusion matrix as proportions of all observations, with margin sums
table(high_use = alc$high_use, prediction = alc$prediction) %>% prop.table() %>% addmargins()
##         prediction
## high_use      FALSE       TRUE        Sum
##    FALSE 0.64397906 0.05759162 0.70157068
##    TRUE  0.17277487 0.12565445 0.29842932
##    Sum   0.81675393 0.18324607 1.00000000

This confirms the earlier analysis: the predictions for FALSE are much better than those for TRUE.

# define a loss function (mean prediction error)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = alc$high_use, prob = 0)
## [1] 0.2984293

If we simply guessed FALSE for everyone (prob = 0), we would be wrong about 30% of the time.

loss_func(class = alc$high_use, prob = 1)
## [1] 0.7015707

And if we guessed TRUE for everyone (prob = 1), we would be wrong about 70% of the time.

# probability based on the column probability
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2303665

Using the model's predicted probabilities instead, the share of wrong predictions in the training data is about 23%, clearly better than either constant guess.
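
A quick sanity check of the loss function: guessing FALSE for everyone is wrong exactly for the TRUE cases, so that error equals the share of high users.

# proportion of high users equals the training error of the constant FALSE guess
mean(alc$high_use)  # ~ 0.2984, matching loss_func(class = alc$high_use, prob = 0)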

# 10-fold cross-validation
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = my_model3, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2382199

The error rate is a little bit better (0.24) than the one in DataCamp (0.26).

# 10-fold cross-validation for different models
# "school","sex","age","address","famsize","Pstatus","Medu","Fedu","Mjob","Fjob","reason","nursery","internet",
# "guardian","traveltime","studytime","failures","schoolsup","famsup","paid","activities","higher","romantic",
# "famrel","freetime","goout","Dalc","Walc","health","absences","G1","G2","G3","alc_use","high_use"
my_model4 <- glm(high_use ~ school + sex + age + Pstatus + Medu + Fedu + Mjob + Fjob + reason + nursery + internet + guardian + traveltime + studytime + failures + schoolsup + famsup + paid + activities + higher + romantic + famrel + freetime + goout + health + absences + G1 + G2+ G3, data = alc, family = "binomial")
cv <- cv.glm(data = alc, cost = loss_func, glmfit = my_model4, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2670157

Using a model with many predictors is not useful, since its error rate is higher than that of the model with fewer predictors.

my_model5 <- glm(high_use ~ sex + age + internet + guardian + traveltime + studytime + failures + schoolsup + famsup + paid + activities + higher + romantic + famrel + freetime + goout + health + absences + G1 + G2+ G3, data = alc, family = "binomial")
cv <- cv.glm(data = alc, cost = loss_func, glmfit = my_model5, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2251309

The error rate gets smaller when removing predictors that have little association with high_use.

my_model6 <- glm(high_use ~ sex + age + internet + guardian + traveltime + studytime + failures + schoolsup + famsup + activities + higher + romantic + freetime + goout + health + absences + G1 + G2+ G3, data = alc, family = "binomial")
cv <- cv.glm(data = alc, cost = loss_func, glmfit = my_model6, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2356021

This time the error rate increased slightly (0.236 vs. 0.225), so removing these predictors did not improve the model further.


Clustering and classification

# the Boston data from the MASS package
# access the MASS package
library(MASS)
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
## 
##     select
# load the data
data("Boston")
# explore the dataset
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...

The dataset is Housing Values in Suburbs of Boston. The data frame contains the following columns:

crim: per capita crime rate by town.
zn: proportion of residential land zoned for lots over 25,000 sq.ft.
indus: proportion of non-retail business acres per town.
chas: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise).
nox: nitrogen oxides concentration (parts per 10 million).
rm: average number of rooms per dwelling.
age: proportion of owner-occupied units built prior to 1940.
dis: weighted mean of distances to five Boston employment centres.
rad: index of accessibility to radial highways.
tax: full-value property-tax rate per $10,000.
ptratio: pupil-teacher ratio by town.
black: 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town.
lstat: lower status of the population (percent).
medv: median value of owner-occupied homes in $1000s.

chas and rad are of type integer; the rest of the variables are of type numeric.

summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08205   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

The summary shows the minimum, maximum, mean, and the first, second (median), and third quartiles of each variable in the dataset.

dim(Boston)
## [1] 506  14

The dataset has 506 rows and 14 columns.

# plot matrix of the variables
pairs(Boston[-1])

nox and dis, rm and lstat, rm and medv, and lstat and medv show some kind of linear pattern.

library(corrplot)
## corrplot 0.84 loaded
library(tidyverse)
## -- Attaching packages --------------------------------------- tidyverse 1.3.0 --
## v tibble  3.0.4     v stringr 1.4.0
## v readr   1.4.0     v forcats 0.5.0
## v purrr   0.3.4
## -- Conflicts ------------------------------------------ tidyverse_conflicts() --
## x dplyr::filter() masks stats::filter()
## x dplyr::lag()    masks stats::lag()
## x MASS::select()  masks dplyr::select()
# calculate the correlation matrix and round it
cor_matrix<-cor(Boston) 

# print the correlation matrix
corrplot(cor_matrix, method="circle")

crim correlates strongly with rad and tax;
zn with dis;
indus with nox, age, rad, tax, lstat and dis;
nox with indus, age, rad, tax, lstat and dis;
rm with medv;
age with indus, nox and lstat;
dis with zn, indus, nox and age;
rad with crim, indus, nox and especially tax;
tax with crim, indus, nox, lstat and especially rad;
lstat with indus, rm, nox, age and medv;
and medv with rm and lstat.

library(GGally)
library(ggplot2)
p <- ggpairs(Boston, mapping = aes(), lower = list(combo = wrap("facethist", bins = 20)))
p

Only rm looks like it’s almost normally distributed. The data needs to be scaled.

# center and standardize variables
boston_scaled <- scale(Boston)
# summaries of the scaled variables
summary(boston_scaled)
##       crim                 zn               indus              chas        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648  
##       nox                rm               age               dis         
##  Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658  
##  1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049  
##  Median :-0.1441   Median :-0.1084   Median : 0.3171   Median :-0.2790  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617  
##  Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566  
##       rad               tax             ptratio            black        
##  Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033  
##  1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049  
##  Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332  
##  Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406  
##      lstat              medv        
##  Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 3.5453   Max.   : 2.9865

Scaling has changed the range (min and max) of every variable: each now has mean 0 and standard deviation 1.
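
As a quick sketch of what scale() does column by column (checked here for crim):

# scale() subtracts the column mean and divides by the column standard deviation
all.equal(as.numeric(boston_scaled[, "crim"]),
          (Boston$crim - mean(Boston$crim)) / sd(Boston$crim))  # TRUE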

# change the object to data frame so that it will be easier to use the data
boston_scaled <- as.data.frame(boston_scaled)
class(boston_scaled)
## [1] "data.frame"

Our next job is to create a categorical variable of the crime rate in the Boston dataset (from the scaled crime rate) using quantiles as the break points.

# summary of the scaled crime rate
summary(boston_scaled$crim)
##      Min.   1st Qu.    Median      Mean   3rd Qu.      Max. 
## -0.419367 -0.410563 -0.390280  0.000000  0.007389  9.924110

The min value is -0.42 and the max value is 9.92. The 1st quartile is -0.41, the median is -0.39, and the 3rd quartile is 0.007.

# create a quantile vector of crim
bins <- quantile(boston_scaled$crim)
bins
##           0%          25%          50%          75%         100% 
## -0.419366929 -0.410563278 -0.390280295  0.007389247  9.924109610

These would be the limits for each category.

# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE)
# look at the table of the new factor crime
table(crime)
## crime
## [-0.419,-0.411]  (-0.411,-0.39] (-0.39,0.00739]  (0.00739,9.92] 
##             127             126             126             127

127 values have been assigned to the first and last categories, and 126 to the second and third. Values between -0.419 and -0.411 fall into category one, values between -0.411 and -0.39 into category two, values between -0.39 and 0.00739 into category three, and values between 0.00739 and 9.92 into category four. Let's label those categories low, med_low, med_high, and high.

crime <- cut(boston_scaled$crim, breaks = bins, labels=c("low", "med_low", "med_high", "high"), include.lowest = TRUE)
table(crime)
## crime
##      low  med_low med_high     high 
##      127      126      126      127

Now the categories have names. Next we can remove the original variable (crim) from the scaled dataset.

boston_scaled <- dplyr::select(boston_scaled, -crim)
colnames(boston_scaled)
##  [1] "zn"      "indus"   "chas"    "nox"     "rm"      "age"     "dis"    
##  [8] "rad"     "tax"     "ptratio" "black"   "lstat"   "medv"

And then we can add the new categorized variable (crime) to the dataset.

boston_scaled <- data.frame(boston_scaled, crime)
summary(boston_scaled)
##        zn               indus              chas              nox         
##  Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723   Min.   :-1.4644  
##  1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723   1st Qu.:-0.9121  
##  Median :-0.48724   Median :-0.2109   Median :-0.2723   Median :-0.1441  
##  Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723   3rd Qu.: 0.5981  
##  Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648   Max.   : 2.7296  
##        rm               age               dis               rad         
##  Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658   Min.   :-0.9819  
##  1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049   1st Qu.:-0.6373  
##  Median :-0.1084   Median : 0.3171   Median :-0.2790   Median :-0.5225  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617   3rd Qu.: 1.6596  
##  Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566   Max.   : 1.6596  
##       tax             ptratio            black             lstat        
##  Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033   Min.   :-1.5296  
##  1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049   1st Qu.:-0.7986  
##  Median :-0.4642   Median : 0.2746   Median : 0.3808   Median :-0.1811  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332   3rd Qu.: 0.6024  
##  Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406   Max.   : 3.5453  
##       medv              crime    
##  Min.   :-1.9063   low     :127  
##  1st Qu.:-0.5989   med_low :126  
##  Median :-0.1449   med_high:126  
##  Mean   : 0.0000   high    :127  
##  3rd Qu.: 0.2683                 
##  Max.   : 2.9865

Now the data is ready and we can start working with it. First we divide the data into training (80%) and testing (20%) sets.

# number of rows in the Boston dataset 
n <- 506
# choose randomly 80% of the rows (no set.seed() is used, so the split and the results below differ between runs)
ind <- sample(n,  size = n * 0.8)
# create train set from that 80%
train <- boston_scaled[ind,]
# create test set from the remaining data
test <- boston_scaled[-ind,]

The train dataset has 404 rows and 14 columns, and the test dataset has 102 rows and 14 columns. Let's train a linear discriminant analysis (LDA) classification model with crime as the target variable.

lda.fit <- lda(crime ~ . , data = train)
lda.fit
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2599010 0.2524752 0.2376238 0.2500000 
## 
## Group means:
##                   zn      indus        chas        nox         rm        age
## low       1.05000724 -0.8917100 -0.08484810 -0.9104166  0.4395456 -0.9168821
## med_low  -0.07381211 -0.3147546 -0.04073494 -0.5801415 -0.1106048 -0.3765485
## med_high -0.37826080  0.2471770  0.13778554  0.4373572  0.1568852  0.4333766
## high     -0.48724019  1.0171306 -0.03844192  1.0628141 -0.4370939  0.8008818
##                 dis        rad        tax     ptratio      black         lstat
## low       0.9888967 -0.6854573 -0.7074265 -0.47699882  0.3715117 -0.7557521744
## med_low   0.3599472 -0.5348698 -0.4539752 -0.07138896  0.3498424 -0.1607143535
## med_high -0.4019522 -0.4327604 -0.3017861 -0.32733322  0.0382247  0.0004608454
## high     -0.8441014  1.6379981  1.5139626  0.78062517 -0.8899753  0.9322193568
##                  medv
## low       0.493069632
## med_low   0.007412558
## med_high  0.230428729
## high     -0.701376003
## 
## Coefficients of linear discriminants:
##                 LD1         LD2         LD3
## zn       0.08597472  0.58233695 -0.96501501
## indus    0.05772263 -0.25247520  0.04767239
## chas    -0.07698443  0.01567354  0.07777614
## nox      0.39510113 -0.83056654 -1.23887203
## rm      -0.10775460 -0.09945630 -0.15441338
## age      0.17125165 -0.26469206 -0.16617743
## dis     -0.05461387 -0.13935386 -0.07577505
## rad      3.35406716  0.94340568 -0.13159726
## tax      0.01493065  0.11574470  0.56734358
## ptratio  0.13361736 -0.08008156 -0.22413610
## black   -0.12149419  0.04212662  0.18180874
## lstat    0.24612063 -0.23052383  0.33099266
## medv     0.22759224 -0.46400825 -0.22028030
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9488 0.0395 0.0117

Prior probabilities of groups: the proportion of training observations in each group. The observations are quite equally distributed across the groups, all roughly 24-26% (the exact values vary between runs, since the train/test split is random).

Group means: group center of gravity, the mean of each variable in each group.

Coefficients of linear discriminants: the linear combination of the predictor variables used to form the LDA decision rule. For example, from the output above, LD1 = 0.086*zn + 0.058*indus - 0.077*chas + 0.395*nox - 0.108*rm + 0.171*age - 0.055*dis + 3.354*rad + 0.015*tax + 0.134*ptratio - 0.121*black + 0.246*lstat + 0.228*medv.

Proportion of trace is the percentage of separation achieved by each discriminant function: LD1 0.9488, LD2 0.0395, LD3 0.0117. LD1 alone achieves about 95% of the separation, whereas the other discriminants contribute very little.
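
As a sketch, the proportion of trace printed above can be reproduced from the singular values stored in the fitted lda object:

# squared singular values give each discriminant's share of the separation
lda.fit$svd^2 / sum(lda.fit$svd^2)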

Let's define the arrows, create a numeric vector of the train set's crime classes, and draw a biplot.

lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}
classes <- as.numeric(train$crime)
plot(lda.fit, dimen = 2, col = classes, pch = classes)

The colour indicates each category. Let’s add the arrows we specified earlier.

plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 5)

Next we take the crime classes from the test set and save them as correct_classes (so that we can compare against them when testing), and remove the crime variable from the test dataset so that we can predict it with the model we built.

correct_classes <- test$crime
class(correct_classes)
## [1] "factor"
test <- dplyr::select(test, -crime)
colnames(test)
##  [1] "zn"      "indus"   "chas"    "nox"     "rm"      "age"     "dis"    
##  [8] "rad"     "tax"     "ptratio" "black"   "lstat"   "medv"

There is no longer a crime variable in the test dataset. Let's use the model to predict on the test data, and then compare the predictions to correct_classes.

lda.pred <- predict(lda.fit, newdata = test)
table(correct = correct_classes, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low        9      12        1    0
##   med_low    3      17        4    0
##   med_high   0      14       14    2
##   high       0       0        0   26

For the high category the model made excellent predictions: 26/26 correct. For med_high 14/30, for med_low 17/24, and for low 9/22 were correctly predicted.
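
The overall accuracy can be computed from the same cross-tabulation (a small sketch; the exact numbers vary with the random split):

# share of test observations on the diagonal of the confusion matrix
cm <- table(correct = correct_classes, predicted = lda.pred$class)
sum(diag(cm)) / sum(cm)  # here (9 + 17 + 14 + 26) / 102, about 0.65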

Clustering

# load the Boston dataset, scale it and create the euclidean distance matrix
library(MASS)
data('Boston')
boston_scaled <- scale(Boston)
boston_scaled <- as.data.frame(boston_scaled)
dist_eu <- dist(boston_scaled, method = "euclidean")
summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970

The Euclidean distance is the ordinary straight-line distance between two vectors.

Let’s calculate the manhattan distance.

dist_man <- dist(boston_scaled, method = "manhattan")
summary(dist_man)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.2662  8.4832 12.6090 13.5488 17.7568 48.8618

The Manhattan distance is the sum of the absolute differences between the components of two vectors.
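
A tiny worked example of the two metrics (the vectors are invented for illustration):

# Euclidean vs Manhattan distance between two hand-made points
x <- c(1, 2)
y <- c(4, 6)
sqrt(sum((x - y)^2))                     # Euclidean: 5
sum(abs(x - y))                          # Manhattan: 7
dist(rbind(x, y), method = "manhattan")  # same Manhattan result via dist()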

K-means clustering

km <-kmeans(boston_scaled, centers = 4)
pairs(boston_scaled, col = km$cluster)

Above we can see K-means clustering using 4 clusters, each identified by a different color.

What is the best k, i.e. the number of clusters? One way to determine k is to look at how the total within-cluster sum of squares (WCSS) behaves when the number of clusters changes. When you plot the number of clusters against the total WCSS, the optimal number of clusters is where the total WCSS drops radically. Note that k-means assigns the initial cluster centers randomly and can therefore produce different results on every run.

set.seed(900)
k_max <- 10
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss})
qplot(x = 1:k_max, y = twcss, geom = 'line')

It looks like 2 is the optimal number of clusters, since the curve drops dramatically at k = 2.

Let’s create k-means using 2 as number of clusters.

km <- kmeans(boston_scaled, centers = 2)
pairs(boston_scaled, col = km$cluster)

Of the variable pairs, rm and lstat and rm and medv are the only ones showing a roughly linear pattern, while medv and lstat as well as dis and nox show a curved, non-linear pattern.

Bonus.

library(MASS)
data('Boston')
boston_scaled <- scale(Boston)
boston_scaled <- as.data.frame(boston_scaled)

boston_scaled <- dplyr::select(boston_scaled, -crim)
n <- nrow(boston_scaled)              # 506 rows in the Boston data
ind <- sample(n, size = n * 0.8)      # 80% of the rows for training
ktrain <- boston_scaled[ind,]
ktest <- boston_scaled[-ind,]
km <- kmeans(ktrain, centers = 4)
lda.fit <- lda(km$cluster ~ . , data = ktrain)
lda.fit
## Call:
## lda(km$cluster ~ ., data = ktrain)
## 
## Prior probabilities of groups:
##         1         2         3         4 
## 0.1064356 0.3143564 0.4133663 0.1658416 
## 
## Group means:
##            zn      indus        chas         nox         rm        age
## 1 -0.02556311 -0.4214017  1.65044081 -0.06341642  1.3347678  0.2238756
## 2 -0.48724019  1.1535174 -0.08632433  1.13408537 -0.4174781  0.8283821
## 3 -0.35206167 -0.4075331 -0.27232907 -0.42141667 -0.2310348 -0.1412526
## 4  1.77276888 -1.0794322 -0.27232907 -1.12647984  0.5809091 -1.4036878
##          dis        rad        tax     ptratio      black      lstat       medv
## 1 -0.3483776 -0.3942834 -0.6093748 -1.02573014  0.2939814 -0.7238248  1.3805896
## 2 -0.8624951  1.1061684  1.2066260  0.60355843 -0.5855684  0.8635993 -0.7237526
## 3  0.1691226 -0.6043213 -0.6192280  0.05262372  0.3101287 -0.1548870 -0.1081300
## 4  1.4940692 -0.6064768 -0.5669409 -0.61647652  0.3518842 -0.8690652  0.6220355
## 
## Coefficients of linear discriminants:
##                  LD1          LD2          LD3
## zn       0.003479948 -1.311689426 -0.761115369
## indus    0.936737602 -0.407503993 -0.181794321
## chas    -0.167644631  0.631026345 -0.770943356
## nox      0.896989707 -0.452138083 -0.272352528
## rm      -0.034025553  0.165801674 -0.615581321
## age     -0.044459412  0.599126833  0.012642565
## dis     -0.088521463 -0.629471813  0.005214464
## rad      0.642699177  0.117578513 -0.364357886
## tax      0.422662032 -0.667438098 -0.131882972
## ptratio  0.265080739 -0.157872219  0.136575290
## black   -0.056390985 -0.002398193  0.054300281
## lstat    0.311829110  0.026941745 -0.480960215
## medv     0.064842317  0.292044772 -0.831575220
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.6545 0.2024 0.1431

Prior probabilities of groups are the proportions of training observations in each group: 0.106, 0.314, 0.413 and 0.166 for clusters 1 to 4, so for example about 41% of the observations belong to cluster 3.

Group means are the group centres of gravity, i.e. the mean of each variable within each group.

Coefficients of linear discriminants are the linear combinations of the predictor variables that form the LDA decision rule. For example, LD1 = 0.003*zn + 0.937*indus - 0.168*chas + 0.897*nox - 0.034*rm - 0.044*age - 0.089*dis + 0.643*rad + 0.423*tax + 0.265*ptratio - 0.056*black + 0.312*lstat + 0.065*medv.

Proportion of trace is the percentage of separation achieved by each discriminant function: LD1 0.6545, LD2 0.2024, LD3 0.1431, which sum to 0.6545 + 0.2024 + 0.1431 = 1.
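Both of these quantities can be recomputed from the fitted objects (a small sketch; the proportion of trace follows from the singular values that lda() stores in lda.fit$svd):

# the priors are simply the cluster proportions in the training data
prop.table(table(km$cluster))
# proportion of trace recomputed from the singular values
round(lda.fit$svd^2 / sum(lda.fit$svd^2), 4)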

# lda.arrows() was already defined above, so we can reuse it here
classes <- km$cluster   # colour by the k-means cluster, which is the LDA target here
plot(lda.fit, dimen = 2, col = classes, pch = classes)

Super-Bonus

model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404  13
dim(lda.fit$scaling)
## [1] 13  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
library(plotly)
## 
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
## 
##     select
## The following object is masked from 'package:ggplot2':
## 
##     last_plot
## The following object is masked from 'package:stats':
## 
##     filter
## The following object is masked from 'package:graphics':
## 
##     layout
# 3D plot by crime (test)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color= train$crime)
## Warning: `arrange_()` is deprecated as of dplyr 0.7.0.
## Please use `arrange()` instead.
## See vignette('programming') for more help
## This warning is displayed once every 8 hours.
## Call `lifecycle::last_warnings()` to see where this warning was generated.
# 3D plot by k means cluster
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color= km$cluster)

The coloring of the two plots is very different but the shape is the same, because the data points are identical. The first plot colors the points by crime level, and the second by the k-means cluster each point belongs to.


Dimensionality reduction techniques

human <- read.csv("C:/Users/Heli/Heli/HY/Introduction to Open Data Science/Projects/IODS-project/data\\human2.csv", sep=",", dec = ".", header=TRUE)
# print only the first rows of the data
head(human)
##   X edu2F labM LifeExpect ExpectYrsEd GNIperCapita MMRatio BirthRate
## 1 1   5.9 79.5       60.4         9.3         1885     400      86.8
## 2 2  81.8 65.5       77.8        11.8         9943      21      15.3
## 3 3  26.7 72.2       74.8        14.0        13054      89      10.0
## 4 4  56.3 75.0       76.3        17.9        22050      69      54.4
## 5 5  94.0 72.6       74.7        12.3         8124      29      27.1
## 6 6  94.3 71.8       82.4        20.2        42261       6      12.1
##   PercRepresinParliament
## 1                   27.6
## 2                   20.7
## 3                   25.7
## 4                   36.8
## 5                   10.7
## 6                   30.5
dim(human)
## [1] 155   9

Show a graphical overview of the data and show summaries of the variables in the data. Describe and interpret the outputs, commenting on the distributions of the variables and the relationships between them.

library(GGally)
library(ggplot2)
p <- ggpairs(human, mapping = aes(), lower = list(combo = wrap("facethist", bins = 20)))
p

Based on the summary data and the ggpairs plot we can see that ExpectYrsEd is close to normally distributed and labM roughly so; the rest of the variables are clearly skewed. There are strong correlations between BirthRate and MMRatio, ExpectYrsEd and MMRatio, ExpectYrsEd and BirthRate, LifeExpect and MMRatio, LifeExpect and edu2F, and MMRatio and edu2F.

summary(human)
##        X             edu2F             labM         LifeExpect   
##  Min.   :  1.0   Min.   :  0.90   Min.   :44.20   Min.   :49.00  
##  1st Qu.: 39.5   1st Qu.: 27.15   1st Qu.:68.70   1st Qu.:66.30  
##  Median : 78.0   Median : 56.60   Median :74.80   Median :74.20  
##  Mean   : 78.0   Mean   : 55.37   Mean   :74.38   Mean   :71.65  
##  3rd Qu.:116.5   3rd Qu.: 85.15   3rd Qu.:80.60   3rd Qu.:77.25  
##  Max.   :155.0   Max.   :100.00   Max.   :95.50   Max.   :83.50  
##   ExpectYrsEd     GNIperCapita       MMRatio         BirthRate     
##  Min.   : 5.40   Min.   :   581   Min.   :   1.0   Min.   :  0.60  
##  1st Qu.:11.25   1st Qu.:  4198   1st Qu.:  11.5   1st Qu.: 12.65  
##  Median :13.50   Median : 12040   Median :  49.0   Median : 33.60  
##  Mean   :13.18   Mean   : 17628   Mean   : 149.1   Mean   : 47.16  
##  3rd Qu.:15.20   3rd Qu.: 24512   3rd Qu.: 190.0   3rd Qu.: 71.95  
##  Max.   :20.20   Max.   :123124   Max.   :1100.0   Max.   :204.80  
##  PercRepresinParliament
##  Min.   : 0.00         
##  1st Qu.:12.40         
##  Median :19.30         
##  Mean   :20.91         
##  3rd Qu.:27.95         
##  Max.   :57.50
pairs(human)

The pairs plot supports the same interpretations as made above.

library(corrplot)
library(tidyverse)
# calculate the correlation matrix
cor_matrix <- cor(human)

# print the correlation matrix
corrplot(cor_matrix, method="circle")

The correlations can be seen more clearly in the corrplot chart. Strong positive correlations appear between BirthRate and MMRatio, ExpectYrsEd and edu2F, ExpectYrsEd and LifeExpect, and LifeExpect and edu2F. Strong negative correlations appear between BirthRate and edu2F, BirthRate and LifeExpect, BirthRate and ExpectYrsEd, MMRatio and edu2F, MMRatio and LifeExpect, and MMRatio and ExpectYrsEd.
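For reference, the strongest pairs can also be listed numerically from cor_matrix (a sketch; the 0.7 cut-off is an arbitrary choice):

# melt the matrix, keep each pair only once, and filter on |r| > 0.7
cc <- as.data.frame(as.table(cor_matrix))
cc[abs(cc$Freq) > 0.7 & as.integer(cc$Var1) < as.integer(cc$Var2), ]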

Perform principal component analysis (PCA) on the non-standardized human data. Show the variability captured by the principal components. Draw a biplot displaying the observations by the first two principal components.

pca_human <- prcomp(human)
biplot(pca_human, choices = 1:2, cex = c(0.8, 1), col = c("grey40", "deeppink2"))
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped

This biplot really does not tell us anything: without standardization, GNIperCapita (which has by far the largest variance) dominates the first principal component, and the remaining arrows collapse into an unreadable mess. We need to standardize the variables in the human data and repeat the analysis.

human_std <- scale(human)
pca_human_std <- prcomp(human_std)
biplot(pca_human_std, choices = 1:2, cex = c(0.8, 1), col = c("grey40", "deeppink2"))

The same correlations as described earlier can be seen here. Arrows pointing in the same direction indicate positive correlation, and the smaller the angle between them, the stronger the correlation; arrows pointing in opposite directions indicate negative correlation.

The angle between a variable arrow and a PC axis can be interpreted as the correlation between the two. The lengths of the arrows are proportional to the standard deviations of the variables.
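Since the arrow directions come from the variable loadings, the same information can be read numerically from the rotation matrix (a minimal sketch using the pca_human_std object from above):

# loadings of each variable on the first two principal components
round(pca_human_std$rotation[, 1:2], 3)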

Create a summary of the PCA and the rounded percentages of variance captured by each PC.

s <- summary(pca_human_std)
pca_pr <- round(100*s$importance[2, ], digits = 5)
pca_pr
##    PC1    PC2    PC3    PC4    PC5    PC6    PC7    PC8    PC9 
## 52.527 11.767 10.894  9.632  5.616  3.553  2.628  2.175  1.208

PC1 captures about 53% and PC2 about 12% of the total variance.

pc_lab <- paste0(names(pca_pr), " (", pca_pr, "%)")
biplot(pca_human_std, cex = c(0.8, 1), col = c("grey40", "deeppink2"), xlab = pc_lab[1], ylab = pc_lab[2])

This diagram shows PC1 and PC2, labelled with the percentage of variance each captures.

Next we will load the tea dataset from the FactoMineR package and explore the data briefly.

library(FactoMineR)
data("tea")
str(tea)
## 'data.frame':    300 obs. of  36 variables:
##  $ breakfast       : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
##  $ tea.time        : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
##  $ evening         : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
##  $ lunch           : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
##  $ dinner          : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
##  $ always          : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
##  $ home            : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
##  $ work            : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
##  $ tearoom         : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
##  $ friends         : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
##  $ resto           : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
##  $ pub             : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
##  $ Tea             : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How             : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ sugar           : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ how             : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ where           : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ price           : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
##  $ age             : int  39 45 47 23 48 21 37 36 40 37 ...
##  $ sex             : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
##  $ SPC             : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
##  $ Sport           : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
##  $ age_Q           : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
##  $ frequency       : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
##  $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
##  $ spirituality    : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
##  $ healthy         : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
##  $ diuretic        : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
##  $ friendliness    : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
##  $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
##  $ feminine        : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
##  $ sophisticated   : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
##  $ slimming        : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ exciting        : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
##  $ relaxing        : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
##  $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...

There are 300 rows and 36 variables in the tea dataset. Age is of type int but the rest of the variables are of type factor.

summary(tea)
##          breakfast           tea.time          evening          lunch    
##  breakfast    :144   Not.tea time:131   evening    :103   lunch    : 44  
##  Not.breakfast:156   tea time    :169   Not.evening:197   Not.lunch:256  
##                                                                          
##                                                                          
##                                                                          
##                                                                          
##                                                                          
##         dinner           always          home           work    
##  dinner    : 21   always    :103   home    :291   Not.work:213  
##  Not.dinner:279   Not.always:197   Not.home:  9   work    : 87  
##                                                                 
##                                                                 
##                                                                 
##                                                                 
##                                                                 
##         tearoom           friends          resto          pub     
##  Not.tearoom:242   friends    :196   Not.resto:221   Not.pub:237  
##  tearoom    : 58   Not.friends:104   resto    : 79   pub    : 63  
##                                                                   
##                                                                   
##                                                                   
##                                                                   
##                                                                   
##         Tea         How           sugar                     how     
##  black    : 74   alone:195   No.sugar:155   tea bag           :170  
##  Earl Grey:193   lemon: 33   sugar   :145   tea bag+unpackaged: 94  
##  green    : 33   milk : 63                  unpackaged        : 36  
##                  other:  9                                          
##                                                                     
##                                                                     
##                                                                     
##                   where                 price          age        sex    
##  chain store         :192   p_branded      : 95   Min.   :15.00   F:178  
##  chain store+tea shop: 78   p_cheap        :  7   1st Qu.:23.00   M:122  
##  tea shop            : 30   p_private label: 21   Median :32.00          
##                             p_unknown      : 12   Mean   :37.05          
##                             p_upscale      : 53   3rd Qu.:48.00          
##                             p_variable     :112   Max.   :90.00          
##                                                                          
##            SPC               Sport       age_Q          frequency  
##  employee    :59   Not.sportsman:121   15-24:92   1/day      : 95  
##  middle      :40   sportsman    :179   25-34:69   1 to 2/week: 44  
##  non-worker  :64                       35-44:40   +2/day     :127  
##  other worker:20                       45-59:61   3 to 6/week: 34  
##  senior      :35                       +60  :38                    
##  student     :70                                                   
##  workman     :12                                                   
##              escape.exoticism           spirituality        healthy   
##  escape-exoticism    :142     Not.spirituality:206   healthy    :210  
##  Not.escape-exoticism:158     spirituality    : 94   Not.healthy: 90  
##                                                                       
##                                                                       
##                                                                       
##                                                                       
##                                                                       
##          diuretic             friendliness            iron.absorption
##  diuretic    :174   friendliness    :242   iron absorption    : 31   
##  Not.diuretic:126   Not.friendliness: 58   Not.iron absorption:269   
##                                                                      
##                                                                      
##                                                                      
##                                                                      
##                                                                      
##          feminine             sophisticated        slimming          exciting  
##  feminine    :129   Not.sophisticated: 85   No.slimming:255   exciting   :116  
##  Not.feminine:171   sophisticated    :215   slimming   : 45   No.exciting:184  
##                                                                                
##                                                                                
##                                                                                
##                                                                                
##                                                                                
##         relaxing              effect.on.health
##  No.relaxing:113   effect on health   : 66    
##  relaxing   :187   No.effect on health:234    
##                                               
##                                               
##                                               
##                                               
## 

There are many variables (36), which makes analyzing the data more difficult.

library(GGally)
library(ggplot2)
t <- ggpairs(tea, mapping = aes(), lower = list(combo = wrap("facethist", bins = 20)))
t

Obviously there are far too many variables to make sense of a single plot. We will choose a handful of them to keep and ignore the rest.

keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")
tea_time <- dplyr::select(tea, one_of(keep_columns))
summary(tea_time)
##         Tea         How                      how           sugar    
##  black    : 74   alone:195   tea bag           :170   No.sugar:155  
##  Earl Grey:193   lemon: 33   tea bag+unpackaged: 94   sugar   :145  
##  green    : 33   milk : 63   unpackaged        : 36                 
##                  other:  9                                          
##                   where           lunch    
##  chain store         :192   lunch    : 44  
##  chain store+tea shop: 78   Not.lunch:256  
##  tea shop            : 30                  
## 

There are three different kinds of tea, and it can be drunk alone or with lemon, milk, or something else. It can be bought as tea bags, unpackaged, or both ("tea bag+unpackaged"). The tea can be drunk with or without sugar, bought from a chain store, a tea shop, or both, and drunk either with lunch or not. The most common profile is Earl Grey, drunk alone, from a tea bag, without sugar, bought from a chain store, and not with lunch.
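The most common combination can be verified with a quick count (a sketch, assuming dplyr):

library(dplyr)
# most frequent combination of the six variables
tea_time %>% count(Tea, How, how, sugar, where, lunch, sort = TRUE) %>% head(1)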

library(tidyr); library(dplyr); library(ggplot2)
gather(tea_time) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar() + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))
## Warning: attributes are not identical across measure variables;
## they will be dropped

This is a graphical way of showing the same information. These bar charts can also be used to identify variable categories with a very low frequency, since such categories can distort the analysis and should be removed.

Multiple Correspondence Analysis (MCA) is a data analysis technique for nominal categorical data that detects and represents underlying structures in a dataset by representing the data as points in a low-dimensional Euclidean space.

mca <- MCA(tea_time, graph = FALSE)
summary(mca)
## 
## Call:
## MCA(X = tea_time, graph = FALSE) 
## 
## 
## Eigenvalues
##                        Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6   Dim.7
## Variance               0.279   0.261   0.219   0.189   0.177   0.156   0.144
## % of var.             15.238  14.232  11.964  10.333   9.667   8.519   7.841
## Cumulative % of var.  15.238  29.471  41.435  51.768  61.434  69.953  77.794
##                        Dim.8   Dim.9  Dim.10  Dim.11
## Variance               0.141   0.117   0.087   0.062
## % of var.              7.705   6.392   4.724   3.385
## Cumulative % of var.  85.500  91.891  96.615 100.000
## 
## Individuals (the 10 first)
##                       Dim.1    ctr   cos2    Dim.2    ctr   cos2    Dim.3
## 1                  | -0.298  0.106  0.086 | -0.328  0.137  0.105 | -0.327
## 2                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 3                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 4                  | -0.530  0.335  0.460 | -0.318  0.129  0.166 |  0.211
## 5                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 6                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 7                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 8                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 9                  |  0.143  0.024  0.012 |  0.871  0.969  0.435 | -0.067
## 10                 |  0.476  0.271  0.140 |  0.687  0.604  0.291 | -0.650
##                       ctr   cos2  
## 1                   0.163  0.104 |
## 2                   0.735  0.314 |
## 3                   0.062  0.069 |
## 4                   0.068  0.073 |
## 5                   0.062  0.069 |
## 6                   0.062  0.069 |
## 7                   0.062  0.069 |
## 8                   0.735  0.314 |
## 9                   0.007  0.003 |
## 10                  0.643  0.261 |
## 
## Categories (the 10 first)
##                        Dim.1     ctr    cos2  v.test     Dim.2     ctr    cos2
## black              |   0.473   3.288   0.073   4.677 |   0.094   0.139   0.003
## Earl Grey          |  -0.264   2.680   0.126  -6.137 |   0.123   0.626   0.027
## green              |   0.486   1.547   0.029   2.952 |  -0.933   6.111   0.107
## alone              |  -0.018   0.012   0.001  -0.418 |  -0.262   2.841   0.127
## lemon              |   0.669   2.938   0.055   4.068 |   0.531   1.979   0.035
## milk               |  -0.337   1.420   0.030  -3.002 |   0.272   0.990   0.020
## other              |   0.288   0.148   0.003   0.876 |   1.820   6.347   0.102
## tea bag            |  -0.608  12.499   0.483 -12.023 |  -0.351   4.459   0.161
## tea bag+unpackaged |   0.350   2.289   0.056   4.088 |   1.024  20.968   0.478
## unpackaged         |   1.958  27.432   0.523  12.499 |  -1.015   7.898   0.141
##                     v.test     Dim.3     ctr    cos2  v.test  
## black                0.929 |  -1.081  21.888   0.382 -10.692 |
## Earl Grey            2.867 |   0.433   9.160   0.338  10.053 |
## green               -5.669 |  -0.108   0.098   0.001  -0.659 |
## alone               -6.164 |  -0.113   0.627   0.024  -2.655 |
## lemon                3.226 |   1.329  14.771   0.218   8.081 |
## milk                 2.422 |   0.013   0.003   0.000   0.116 |
## other                5.534 |  -2.524  14.526   0.197  -7.676 |
## tea bag             -6.941 |  -0.065   0.183   0.006  -1.287 |
## tea bag+unpackaged  11.956 |   0.019   0.009   0.000   0.226 |
## unpackaged          -6.482 |   0.257   0.602   0.009   1.640 |
## 
## Categorical variables (eta2)
##                      Dim.1 Dim.2 Dim.3  
## Tea                | 0.126 0.108 0.410 |
## How                | 0.076 0.190 0.394 |
## how                | 0.708 0.522 0.010 |
## sugar              | 0.065 0.001 0.336 |
## where              | 0.702 0.681 0.055 |
## lunch              | 0.000 0.064 0.111 |

The eigenvalues give the percentage of variance explained by each dimension:

                      Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6   Dim.7   Dim.8   Dim.9  Dim.10  Dim.11
Variance              0.279   0.261   0.219   0.189   0.177   0.156   0.144   0.141   0.117   0.087   0.062
% of var.            15.238  14.232  11.964  10.333   9.667   8.519   7.841   7.705   6.392   4.724   3.385
Cumulative % of var. 15.238  29.471  41.435  51.768  61.434  69.953  77.794  85.500  91.891  96.615 100.000

Dim.1 explains 15% of the variance, Dim.2 14%, Dim.3 12%, and so on. Dim.1 to Dim.4 together cover more than 50% of the variance.
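The same figures can be read directly from the MCA object (a sketch; in FactoMineR, mca$eig holds the eigenvalue, the percentage of variance, and the cumulative percentage for each dimension):

head(round(mca$eig, 3))
# index of the first dimension where the cumulative variance exceeds 50%
which(mca$eig[, 3] > 50)[1]   # 4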

plot(mca, invisible=c("ind"), habillage = "quali")

Different colors identify the different variables, and their categories are drawn as labelled points. The distance between points gives a measure of their similarity (or dissimilarity): points with a similar profile are close together on the factor map, while rare categories (e.g. lunch, which is far less frequent than Not.lunch, or tea shop compared with chain store) tend to lie further out.


Analysis of longitudinal data

# Load and look at the data sets
library(tidyr)
library(dplyr)
BPRSL <- read.csv("C:/Users/Heli/Heli/HY/Introduction to Open Data Science/Projects/IODS-project/data\\BPRSL.csv", sep=",", dec = ".", header=TRUE)
dim(BPRSL)
## [1] 360   6
glimpse(BPRSL)
## Rows: 360
## Columns: 6
## $ X         <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17...
## $ treatment <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
## $ subject   <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17...
## $ weeks     <chr> "week0", "week0", "week0", "week0", "week0", "week0", "we...
## $ bprs      <int> 42, 58, 54, 55, 72, 48, 71, 30, 41, 57, 30, 55, 36, 38, 6...
## $ week      <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
str(BPRSL)
## 'data.frame':    360 obs. of  6 variables:
##  $ X        : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ treatment: int  1 1 1 1 1 1 1 1 1 1 ...
##  $ subject  : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ weeks    : chr  "week0" "week0" "week0" "week0" ...
##  $ bprs     : int  42 58 54 55 72 48 71 30 41 57 ...
##  $ week     : int  0 0 0 0 0 0 0 0 0 0 ...
RATSL <- read.csv("C:/Users/Heli/Heli/HY/Introduction to Open Data Science/Projects/IODS-project/data\\RATSL.csv", sep=",", dec = ".", header=TRUE)
dim(RATSL)
## [1] 176   6
glimpse(RATSL)
## Rows: 176
## Columns: 6
## $ X      <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 1...
## $ ID     <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 1, 2,...
## $ Group  <int> 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 1, 1, 1, 1, ...
## $ WD     <chr> "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1...
## $ Weight <int> 240, 225, 245, 260, 255, 260, 275, 245, 410, 405, 445, 555, ...
## $ Time   <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8, 8, 8, 8, ...
str(RATSL)
## 'data.frame':    176 obs. of  6 variables:
##  $ X     : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ ID    : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ Group : int  1 1 1 1 1 1 1 1 2 2 ...
##  $ WD    : chr  "WD1" "WD1" "WD1" "WD1" ...
##  $ Weight: int  240 225 245 260 255 260 275 245 410 405 ...
##  $ Time  : int  1 1 1 1 1 1 1 1 1 1 ...

Implement the analyses of Chapter 8 of MABS using the RATS data: Graphical Displays and the Summary Measure Approach.

BPRSL$treatment <- factor(BPRSL$treatment)
BPRSL$subject <- factor(BPRSL$subject)
RATSL$Group <- factor(RATSL$Group)
RATSL$ID <- factor(RATSL$ID)
#Access the package ggplot2
library(ggplot2)

# Draw the plot
ggplot(RATSL, aes(x = Time, y = Weight, linetype = ID)) +
  geom_line() +
  scale_linetype_manual(values = rep(1:10, times=4)) +
  facet_grid(. ~ Group, labeller = label_both) +
  theme(legend.position = "none") + 
  scale_y_continuous(limits = c(min(RATSL$Weight), max(RATSL$Weight)))

Individual weight profiles over time, by group, for the RATS data. The weight increases in all groups for essentially all IDs; for some individuals in groups 1 and 3 the weight starts to decrease at the end of the experiment.

We will standardize the weight.

RATSL <- RATSL %>%
  group_by(Time) %>%
  mutate(stdweight = (Weight - mean(Weight)) / sd(Weight)) %>%
  ungroup()

glimpse(RATSL)
## Rows: 176
## Columns: 7
## $ X         <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17...
## $ ID        <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 1,...
## $ Group     <fct> 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 1, 1, 1, ...
## $ WD        <chr> "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "...
## $ Weight    <int> 240, 225, 245, 260, 255, 260, 275, 245, 410, 405, 445, 55...
## $ Time      <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8, 8, 8, ...
## $ stdweight <dbl> -1.0011429, -1.1203857, -0.9613953, -0.8421525, -0.881900...
ggplot(RATSL, aes(x = Time, y = stdweight, linetype = ID)) +
  geom_line() +
  scale_linetype_manual(values = rep(1:10, times=4)) +
  facet_grid(. ~ Group, labeller = label_both) +
  scale_y_continuous(name = "standardized weight")

Using the standardized data we can see that even in group 2 there are two individuals whose weight decreases, and that the decrease happens throughout the whole experiment, not only at the end of the time axis.

Summary graphs

n <- RATSL$Time %>% unique() %>% length()
RATSS <- RATSL %>%
  group_by(Group, Time) %>%
  summarise( mean = mean(Weight), se = sd(Weight)/sqrt(n) ) %>%
  ungroup()
## `summarise()` regrouping output by 'Group' (override with `.groups` argument)
glimpse(RATSS)
## Rows: 33
## Columns: 4
## $ Group <fct> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2...
## $ Time  <int> 1, 8, 15, 22, 29, 36, 43, 44, 50, 57, 64, 1, 8, 15, 22, 29, 3...
## $ mean  <dbl> 250.625, 255.000, 254.375, 261.875, 264.625, 265.000, 267.375...
## $ se    <dbl> 4.589478, 3.947710, 3.460116, 4.100800, 3.333956, 3.552939, 3...
ggplot(RATSS, aes(x = Time, y = mean, linetype = Group, shape = Group)) +
  geom_line() +
  scale_linetype_manual(values = c(1,3,3)) +
  geom_point(size=3) +
  scale_shape_manual(values = c(1,2,3)) +
  geom_errorbar(aes(ymin = mean - se, ymax = mean + se, linetype="1"), width=0.3) +
  scale_y_continuous(name = "mean(Weight) +/- se(Weight)")

Mean response profiles for the three groups in the RATS data. The mean weights of groups 2 and 3 grow at a similar rate, whereas group 1 grows more slowly.

We will create a summary data by group and ID with mean as the summary variable (ignoring baseline Time 1).

RATSLSS <- RATSL %>%
  filter(Time > 1) %>%
  group_by(Group, ID) %>%
  summarise( mean=mean(Weight) ) %>%
  ungroup()
## `summarise()` regrouping output by 'Group' (override with `.groups` argument)
ggplot(RATSLSS, aes(x = Group, y = mean)) +
  geom_boxplot() +
  stat_summary(fun.y = "mean", geom = "point", shape=23, size=4, fill = "white") +
  scale_y_continuous(name = "mean(Weight), over time")
## Warning: `fun.y` is deprecated. Use `fun` instead.

Group 2 shows the largest spread in mean weight over time. Each group has an outlier, so we will create a new dataset without these outliers.

RATSLSS2 <- filter(RATSLSS, (Group==1 & mean > 250)|(Group==2 & mean < 550)| (Group==3 & mean > 500))

RATSLSS2
## # A tibble: 13 x 3
##    Group ID     mean
##    <fct> <fct> <dbl>
##  1 1     1      263.
##  2 1     3      262.
##  3 1     4      267.
##  4 1     5      271.
##  5 1     6      276.
##  6 1     7      275.
##  7 1     8      268.
##  8 2     9      444.
##  9 2     10     458.
## 10 2     11     456.
## 11 3     14     536.
## 12 3     15     542.
## 13 3     16     536.
ggplot(RATSLSS2, aes(x = Group, y = mean)) +
  geom_boxplot() +
  stat_summary(fun.y = "mean", geom = "point", shape=23, size=4, fill = "white") +
  scale_y_continuous(name = "mean(Weight), over time")
## Warning: `fun.y` is deprecated. Use `fun` instead.

Three groups in a boxplot without outliers.

After examining the graphs, let's compare the groups in a more formal way with a t-test. We choose groups 2 and 3 to see whether they differ.

RATSLSS23 <- filter(RATSLSS2,(Group==2| Group==3))
RATSLSS23$Group <- factor(RATSLSS23$Group)
str(RATSLSS23)
## tibble [6 x 3] (S3: tbl_df/tbl/data.frame)
##  $ Group: Factor w/ 2 levels "2","3": 1 1 1 2 2 2
##  $ ID   : Factor w/ 16 levels "1","2","3","4",..: 9 10 11 14 15 16
##  $ mean : num [1:6] 444 458 456 536 542 ...
t.test(mean ~ Group, data = RATSLSS23, var.equal = TRUE)
## 
##  Two Sample t-test
## 
## data:  mean by Group
## t = -18.235, df = 4, p-value = 5.32e-05
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -98.94088 -72.79246
## sample estimates:
## mean in group 2 mean in group 3 
##        452.4000        538.2667

The t-test statistic is -18.24 with 4 degrees of freedom, and the p-value gives the significance level of the test. The 95% confidence interval for the difference in means between groups 2 and 3 is (-98.94, -72.79), which does not contain zero. The mean in group 2 is 452.40 and in group 3 it is 538.27. The greater the magnitude of t, the greater the evidence against the null hypothesis, and the lower the p-value, the greater the statistical significance of the observed difference. Here p = 5.32e-05, so the null hypothesis of equal group means can be rejected.
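For intuition, the test statistic can be reproduced by hand from the group means and the pooled variance (a small sketch using RATSLSS23 from above):

g2 <- RATSLSS23$mean[RATSLSS23$Group == 2]
g3 <- RATSLSS23$mean[RATSLSS23$Group == 3]
# pooled variance of the two groups
sp2 <- ((length(g2) - 1) * var(g2) + (length(g3) - 1) * var(g3)) /
  (length(g2) + length(g3) - 2)
# two-sample t statistic assuming equal variances
(mean(g2) - mean(g3)) / sqrt(sp2 * (1 / length(g2) + 1 / length(g3)))   # ~ -18.2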

We will continue with the original dataset of 3 groups. Fit the linear model with the mean as the response.

fit <- lm(mean ~ Group, data = RATSLSS2)
anova(fit)
## Analysis of Variance Table
## 
## Response: mean
##           Df Sum Sq Mean Sq F value    Pr(>F)    
## Group      2 176917   88458  2836.4 1.687e-14 ***
## Residuals 10    312      31                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

This is a one-way analysis of variance (ANOVA). In one-way ANOVA, the data are organized into several groups based on a single grouping variable (also called a factor variable).

ANOVA test hypotheses:

Null hypothesis: the means of the different groups are the same.
Alternative hypothesis: at least one sample mean is not equal to the others.

The model summary first lists the independent variables tested in the model; here it is Group. All the variation not explained by the independent variables is called residual variance and is shown on the Residuals line.

The Df column displays the degrees of freedom: 2 for Group (the number of levels minus 1) and 10 for the residuals (the total number of observations minus one, minus the degrees of freedom of the independent variable: 13 - 1 - 2 = 10). The Sum Sq column displays the sums of squares, i.e. the total variation between the group means and the overall mean: 176917 for Group and 312 for the residuals. The Mean Sq column is the sum of squares divided by the degrees of freedom: 88458 for Group and 31 for the residuals.

The F value is the test statistic of the F test: the mean square of the independent variable divided by the mean square of the residuals. The larger the F value, the more likely it is that the variation associated with the independent variable is real and not due to chance; for Group it is 2836.4. The Pr(>F) column is the p-value of the F statistic: how likely the observed F value would be if the null hypothesis of no difference among group means were true. Here the p-value is very low (p < 0.001), so Group has a real effect and the null hypothesis can be rejected.
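The F value is simply the ratio of the two mean squares, which is easy to verify from the (rounded) table values:

ms_group <- 176917 / 2   # Mean Sq for Group
ms_resid <- 312 / 10     # Mean Sq for Residuals
ms_group / ms_resid      # ~ 2835, matching the reported F value up to rounding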

Let's start working on the BPRSL dataset and Chapter 9 of MABS, Linear Mixed Effects Models for Normal Response Variables. We will fit two examples of linear mixed effects models: the random intercept model, and the random intercept and random slope model.

# one line per subject within each treatment group
ggplot(BPRSL, aes(x = week, y = bprs, group = interaction(subject, treatment))) +
  geom_line()

ggplot(BPRSL, aes(x = week, y = bprs, group = interaction(subject, treatment))) +
  geom_line(aes(linetype = treatment)) +
  scale_x_continuous(name = "Week", breaks = seq(0, 8, 1)) +
  scale_y_continuous(name = "bprs") +
  theme(legend.position = "top")

Create a linear regression model, ignoring for now the repeated-measures structure of the data.

BPRSL_reg <- lm(bprs ~ week + treatment, data = BPRSL)
summary(BPRSL_reg)
## 
## Call:
## lm(formula = bprs ~ week + treatment, data = BPRSL)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -22.454  -8.965  -3.196   7.002  50.244 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  46.4539     1.3670  33.982   <2e-16 ***
## week         -2.2704     0.2524  -8.995   <2e-16 ***
## treatment2    0.5722     1.3034   0.439    0.661    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 12.37 on 357 degrees of freedom
## Multiple R-squared:  0.1851, Adjusted R-squared:  0.1806 
## F-statistic: 40.55 on 2 and 357 DF,  p-value: < 2.2e-16

The t value measures the size of the effect relative to its variation, so the bigger its magnitude, the greater the evidence against the null hypothesis. week has a t value large enough to reject the null hypothesis. The p-value (Pr) is below 0.05 for week but not for treatment2 (p = 0.661). Based on these results, the null hypothesis can be rejected for week but not for treatment.

Residual standard error: the standard deviation of the residuals (errors) of the regression model. Multiple R-squared (0.19): the proportion of the variance of bprs explained by the model. Adjusted R-squared (0.18): the same measure adjusted for the number of predictors, describing how well the linear model fits the data (ranging between 0 and 1). The R-squared is quite low.
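Multiple R-squared can be recomputed from the residuals as a check (a minimal sketch using BPRSL_reg from above):

rss <- sum(resid(BPRSL_reg)^2)                  # residual sum of squares
tss <- sum((BPRSL$bprs - mean(BPRSL$bprs))^2)   # total sum of squares
1 - rss / tss                                   # ~ 0.185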

The Random Intercept Model

library(lme4)
## Loading required package: Matrix
## 
## Attaching package: 'Matrix'
## The following objects are masked from 'package:tidyr':
## 
##     expand, pack, unpack
BPRSL_ref <- lmer(bprs ~ week + treatment + (1 | subject), data = BPRSL, REML = FALSE)
summary(BPRSL_ref)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: bprs ~ week + treatment + (1 | subject)
##    Data: BPRSL
## 
##      AIC      BIC   logLik deviance df.resid 
##   2748.7   2768.1  -1369.4   2738.7      355 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.0481 -0.6749 -0.1361  0.4813  3.4855 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  subject  (Intercept)  47.41    6.885  
##  Residual             104.21   10.208  
## Number of obs: 360, groups:  subject, 20
## 
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)  46.4539     1.9090  24.334
## week         -2.2704     0.2084 -10.896
## treatment2    0.5722     1.0761   0.532
## 
## Correlation of Fixed Effects:
##            (Intr) week  
## week       -0.437       
## treatment2 -0.282  0.000

The Akaike Information Criterion (AIC) is a method for scoring and selecting a model: the smaller the better. The value of AIC is 2748.7. The Bayesian Information Criterion (BIC) is another method for scoring and selecting a model, also the smaller the better. The value for BIC is 2768.1. The log-likelihood (logLik) measures how well the model fits the data: the larger (less negative) the value, the better. Here it is -1369.4.

The estimated bprs at week 0 under treatment 1 (the intercept) is 46.45; each week lowers it by 2.27, and treatment 2 raises it by 0.57.

The t-value for week is now larger in absolute value (-10.90) than in the ordinary regression model (-9.00).
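The same criteria can be pulled out of the fitted object directly; AIC(), BIC() and logLik() all work on lmer fits:

AIC(BPRSL_ref)      # 2748.7, the smaller the better
BIC(BPRSL_ref)      # 2768.1, the smaller the better
logLik(BPRSL_ref)   # -1369.4, the larger the better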

Random Intercept and Random Slope Model

BPRSL_ref1 <- lmer(bprs ~ week + treatment + (week | subject), data = BPRSL, REML = FALSE)
summary(BPRSL_ref1)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: bprs ~ week + treatment + (week | subject)
##    Data: BPRSL
## 
##      AIC      BIC   logLik deviance df.resid 
##   2745.4   2772.6  -1365.7   2731.4      353 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.8919 -0.6194 -0.0691  0.5531  3.7976 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev. Corr 
##  subject  (Intercept) 64.8222  8.0512        
##           week         0.9609  0.9802   -0.51
##  Residual             97.4305  9.8707        
## Number of obs: 360, groups:  subject, 20
## 
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)  46.4539     2.1052  22.066
## week         -2.2704     0.2977  -7.626
## treatment2    0.5722     1.0405   0.550
## 
## Correlation of Fixed Effects:
##            (Intr) week  
## week       -0.582       
## treatment2 -0.247  0.000
anova(BPRSL_ref1, BPRSL_ref)
## Data: BPRSL
## Models:
## BPRSL_ref: bprs ~ week + treatment + (1 | subject)
## BPRSL_ref1: bprs ~ week + treatment + (week | subject)
##            npar    AIC    BIC  logLik deviance  Chisq Df Pr(>Chisq)  
## BPRSL_ref     5 2748.7 2768.1 -1369.4   2738.7                       
## BPRSL_ref1    7 2745.4 2772.6 -1365.7   2731.4 7.2721  2    0.02636 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The random intercept model is BPRSL_ref and the random intercept and random slope model is BPRSL_ref1. The anova() call performs a likelihood ratio test between the two models: the chi-square value is 7.27 with 2 degrees of freedom and p = 0.026. Since p < 0.05, the null hypothesis that the simpler model is adequate can be rejected: BPRSL_ref1 is slightly better.
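The chi-square statistic of the likelihood ratio test is simply the difference of the model deviances, with degrees of freedom equal to the difference in the number of parameters (7 - 5 = 2). A minimal sketch using the numbers above:

chisq <- 2738.7 - 2731.4                    # about 7.3, matching the Chisq column
pchisq(chisq, df = 2, lower.tail = FALSE)   # about 0.026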

Random Intercept and Random Slope Model with interaction

BPRSL_ref2 <- lmer(bprs ~ week * treatment + (week | subject), data = BPRSL, REML = FALSE)
summary(BPRSL_ref2)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: bprs ~ week * treatment + (week | subject)
##    Data: BPRSL
## 
##      AIC      BIC   logLik deviance df.resid 
##   2744.3   2775.4  -1364.1   2728.3      352 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.0512 -0.6271 -0.0768  0.5288  3.9260 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev. Corr 
##  subject  (Intercept) 64.9964  8.0620        
##           week         0.9687  0.9842   -0.51
##  Residual             96.4707  9.8220        
## Number of obs: 360, groups:  subject, 20
## 
## Fixed effects:
##                 Estimate Std. Error t value
## (Intercept)      47.8856     2.2521  21.262
## week             -2.6283     0.3589  -7.323
## treatment2       -2.2911     1.9090  -1.200
## week:treatment2   0.7158     0.4010   1.785
## 
## Correlation of Fixed Effects:
##             (Intr) week   trtmn2
## week        -0.650              
## treatment2  -0.424  0.469       
## wek:trtmnt2  0.356 -0.559 -0.840

The t-value for week is -7.32, large enough in absolute value to reject the null hypothesis. Based on the results, the null hypothesis can be rejected for week but not for the treatment terms.

An ANOVA test on the two models: the random intercept and random slope model (BPRSL_ref1) and the random intercept and random slope model with interaction (BPRSL_ref2).

anova(BPRSL_ref2, BPRSL_ref1)
## Data: BPRSL
## Models:
## BPRSL_ref1: bprs ~ week + treatment + (week | subject)
## BPRSL_ref2: bprs ~ week * treatment + (week | subject)
##            npar    AIC    BIC  logLik deviance  Chisq Df Pr(>Chisq)  
## BPRSL_ref1    7 2745.4 2772.6 -1365.7   2731.4                       
## BPRSL_ref2    8 2744.3 2775.4 -1364.1   2728.3 3.1712  1    0.07495 .
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Here the p-value (0.075) is above the 0.05 level, so the interaction does not significantly improve the fit, although the AIC of BPRSL_ref2 is slightly lower.
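For a quick side-by-side comparison, AIC() also accepts several models at once:

AIC(BPRSL_ref, BPRSL_ref1, BPRSL_ref2)   # the interaction model has the lowest AIC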

# Plot the observed bprs values again for comparison with the fitted values
ggplot(BPRSL, aes(x = week, y = bprs, group = treatment)) +
  geom_line(aes(linetype = treatment)) +
  scale_x_continuous(name = "Week", breaks = seq(0, 8, 2)) +  # weeks run from 0 to 8
  scale_y_continuous(name = "bprs") +
  theme(legend.position = "top")

The two treatment groups move in the same overall direction but otherwise behave differently. Week 0 differs even in direction: treatment 2 has a higher bprs value at week 0, but both groups end up at roughly the same value.
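Note that group = treatment draws a single line through all the observations in each treatment arm. A sketch of how individual trajectories could be drawn instead; since subject IDs repeat across the treatments in BPRSL, interaction() is used to separate them:

# One line per subject within each treatment
ggplot(BPRSL, aes(x = week, y = bprs, group = interaction(subject, treatment))) +
  geom_line(aes(linetype = treatment)) +
  scale_x_continuous(name = "Week") +
  theme(legend.position = "top")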

Create a vector of the fitted values, add it to BPRSL as a new column Fitted, and plot the fitted values.

# Fitted values from the interaction model
Fitted <- fitted(BPRSL_ref2)

# Add the fitted values to BPRSL as a new column
BPRSL <- BPRSL %>%
  mutate(Fitted = Fitted)

# Plot the fitted values, one line per treatment group
ggplot(BPRSL, aes(x = week, y = Fitted, group = treatment)) +
  geom_line(aes(linetype = treatment)) +
  scale_x_continuous(name = "Week", breaks = seq(0, 8, 2)) +  # weeks run from 0 to 8
  scale_y_continuous(name = "bprs") +
  theme(legend.position = "top")

Now the two treatments behave more similarly. Treatment 2 starts with a lower fitted bprs value but seems to reach larger values as the weeks go by.
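As a rough check of the fit, the fitted values can be compared with the observed ones. A sketch using the Fitted column created above:

# Correlation and root-mean-square error between observed and fitted bprs
cor(BPRSL$bprs, BPRSL$Fitted)
sqrt(mean((BPRSL$bprs - BPRSL$Fitted)^2))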